Knox profile for website for Steve Carden

Greg Detre

 

A bold statement: people are machines, though machines aren't yet people. If you accept that, there's no reason in principle why machines couldn't be built to work the same way our minds do. Or, indeed, why people's brains couldn't be augmented, allowing us to remember more, experience the world in different ways, or handle more complexity. This is the stuff of artificial intelligence researchers' dreams.

As an undergraduate, I looked at the biological and philosophical aspects of mind (e.g. how neurons work, or whether it makes sense to talk of a mind existing independent of a body). This year, I've focused on the computational modelling side of things: how we might go about actually building a mind on a computer. I've taken introductory computer science courses to fill in any rudimentary gaps in my knowledge, honed my programming skills and worked on a couple of medium-sized independent programming projects. I've also taken courses which look at very high-level cognitive problems (like what it means to have 'common sense') and how we can try to write programs that can learn and use such knowledge, either from the world around them, or directly from human trainers (see for example: http://commonsense.media.mit.edu/cgi-bin/search.cgi).

More recently, I've been looking at the way that people see analogies between problems and ideas in very different domains, and the central role that this plays in our thinking. One lesson AI researchers have learned is that the mental processes that feel most natural and effortless to us are often the hardest to model, because they mask a huge submerged mass of subconscious, low-level processing. It turns out that even very simple analogy problems like:

abc : abd :: ijkk : ?

turn out to be surprisingly tricky to solve with a computer program.
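To see why, here is a minimal, literal-minded solver for this kind of letter-string analogy (the function names and the single hard-coded rule are my own illustrative assumptions, not any particular research system). It infers the rule "replace the last letter with its successor" from abc : abd and applies it mechanically:

```python
def successor(c):
    """Next letter of the alphabet (wraps z -> a)."""
    return chr((ord(c) - ord('a') + 1) % 26 + ord('a'))

def naive_solve(source, target, probe):
    """Infer a rule from source -> target, then apply it literally to probe."""
    # The only change from 'abc' to 'abd' is that the final letter is
    # replaced by its successor -- so the naive rule is:
    # "take the successor of the final letter".
    if target[:-1] == source[:-1] and target[-1] == successor(source[-1]):
        return probe[:-1] + successor(probe[-1])
    raise ValueError("no rule recognised")

print(naive_solve("abc", "abd", "ijkk"))  # -> 'ijkl'
```

The program dutifully answers "ijkl", yet most people prefer "ijll", because they perceive the doubled "kk" as a single group and apply the rule to the group instead. Capturing that kind of perceptual flexibility, rather than hand-coding one brittle rule per problem, is exactly what makes analogy-making so hard to program.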

Next year, I'm hoping to take a position as a research assistant, trying to build biologically realistic models of parts of our brain.